629 research outputs found

    Delivering Bio-MEMS & Microfluidic Education Around Accessible Technologies

    Get PDF
    Electronic systems are now deployed in almost all aspects of daily life, rather than being confined to consumer electronics, computing, communication and control applications as was the case in the 1990s. One of the more significant growth areas is medical instrumentation, health care, biochemical analysis and environmental monitoring. Most of these applications will in the future require the integration of fluidics and biology within complex electronic systems. Technologies are now emerging, together with access services such as the FP6 “INTEGRAMplus” and “MicroBuilder” programs, that offer competitive solutions for companies wishing to design and prototype microfluidic systems. For successful deployment of these systems, a new breed of electronic engineers is needed who understand how to deliver biochemistry and living cells to transducers and integrate the required technologies reliably into robust systems. This paper reports on the initial training initiatives now active under the INTEGRAMplus program.

    Self-Paced Multi-Task Learning

    Full text link
    In this paper, we propose a novel multi-task learning (MTL) framework, called Self-Paced Multi-Task Learning (SPMTL). Unlike previous works that treat all tasks and instances equally during training, SPMTL attempts to learn the tasks jointly by taking into consideration the complexities of both tasks and instances. This is inspired by the cognitive process of the human brain, which often learns from the easy to the hard. We construct a compact SPMTL formulation by proposing a new task-oriented regularizer that can jointly prioritize the tasks and the instances, so the formulation can be interpreted as a self-paced learner for MTL. A simple yet effective algorithm is designed for optimizing the proposed objective function, and an error bound for a simplified formulation is analyzed theoretically. Experimental results on toy and real-world datasets demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods.
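
    The self-paced mechanism described above can be illustrated with a minimal sketch: alternate between selecting "easy" instances whose current loss falls below a threshold and refitting each task's model on the selected instances, then gradually raise the threshold so harder instances are admitted. This is only the generic self-paced idea under simple assumptions (linear per-task models, squared loss, hard 0/1 instance weights, and a growth factor mu), not the paper's task-oriented regularizer or its optimization algorithm; all names below are placeholders.

        import numpy as np

        def self_paced_mtl(tasks, n_rounds=10, lam=1.0, mu=1.5, ridge=1e-2):
            """Sketch of self-paced multi-task training.

            tasks: list of (X, y) pairs, one per task. Returns one weight
            vector per task. lam is the self-paced threshold, mu its growth.
            """
            W = [np.zeros(X.shape[1]) for X, _ in tasks]
            for _ in range(n_rounds):
                for t, (X, y) in enumerate(tasks):
                    losses = (X @ W[t] - y) ** 2            # per-instance loss
                    v = (losses < lam).astype(float)        # hard self-paced weights
                    if v.sum() == 0:                        # nothing "easy" yet:
                        v[:] = 1.0                          # fall back to all data
                    Xw = X * v[:, None]                     # weighted design matrix
                    A = Xw.T @ X + ridge * np.eye(X.shape[1])
                    W[t] = np.linalg.solve(A, Xw.T @ y)     # weighted ridge update
                lam *= mu                                   # admit harder instances
            return W

    In the paper's formulation the task-level and instance-level weights are coupled through the task-oriented regularizer and optimized jointly; they are decoupled here purely to keep the sketch short.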

    Analysis of Thermal Environment in a Hospital Operating Room

    Get PDF
    This paper presents a computational fluid dynamics (CFD) study of thermal comfort in a hospital operating room. The research analyzes indoor thermal comfort using the predicted mean vote (PMV) model defined in ISO 7730. The room model includes a patient lying on an operating table with a surgical staff of six members standing around it under surgical lights. Air is supplied to the room from a ceiling diffuser and exhausted through low-level outlets in the side walls on both sides. Distributions of airflow velocity, temperature, relative humidity and related quantities are presented and discussed. The PMV and PPD are calculated to assess thermal comfort based on the TCM model. The simulation results show that the PMV and PPD values for some parts of the human body are not within the acceptable range defined by the ISO standard, but the comfort level still satisfies the Chinese national standard GB/T 18049. The TCM model is found to be a more comprehensive model for thermal comfort analysis.
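
    For reference, the PMV-PPD relationship used in ISO 7730 is simple enough to state directly: PPD is a fixed function of PMV, and the commonly cited acceptable band of -0.5 <= PMV <= +0.5 corresponds to roughly PPD <= 10%. The sketch below encodes only that relation as a quick check on reported PMV values; it does not reproduce the paper's CFD simulation or the TCM model, and the function names are placeholders.

        import math

        def ppd_from_pmv(pmv):
            """Predicted Percentage Dissatisfied (ISO 7730) as a function of PMV."""
            return 100.0 - 95.0 * math.exp(-(0.03353 * pmv ** 4 + 0.2179 * pmv ** 2))

        def within_iso_range(pmv):
            """Acceptable band used in the abstract: |PMV| <= 0.5 (about PPD <= 10%)."""
            return abs(pmv) <= 0.5

        print(ppd_from_pmv(0.5))      # ~10.2%, i.e. roughly the 10% dissatisfied boundary
        print(within_iso_range(0.8))  # False: outside the acceptable range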

    Regression analysis of mixed sparse synchronous and asynchronous longitudinal covariates with varying-coefficient models

    Full text link
    We consider varying-coefficient models for mixed synchronous and asynchronous longitudinal covariates, where asynchronicity refers to the misalignment of longitudinal measurement times within an individual. We propose three different methods of parameter estimation and inference. The first method is a one-step approach that estimates non-parametric regression functions for synchronous and asynchronous longitudinal covariates simultaneously. The second method is a two-step approach in which the synchronous longitudinal covariates are first centered and regressed against the longitudinal response, and, in the second step, the residuals from the first step are regressed against the asynchronous longitudinal covariates. The third method is the same as the second except that in the first step we omit the asynchronous longitudinal covariate and include a non-parametric intercept in the regression of the longitudinal response on the synchronous longitudinal covariates. We further construct simultaneous confidence bands for the non-parametric regression functions to quantify the overall magnitude of variation. Extensive simulation studies provide numerical support for the theoretical findings. The practical utility of the methods is illustrated on a dataset from the ADNI study.
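
    A rough sketch of the two-step idea (the second method above) under simple assumptions: step 1 fits time-varying coefficients for the synchronous covariates by kernel-weighted least squares on a time grid, and step 2 regresses the resulting residuals on an asynchronous covariate, pairing every response time with every measurement time of that covariate and down-weighting pairs that are far apart in time. The kernel choice, bandwidths, and centering/intercept handling here are illustrative placeholders, not the paper's estimator or its inference procedure.

        import numpy as np

        def epanechnikov(u):
            """Epanechnikov kernel with support |u| <= 1."""
            return 0.75 * np.maximum(1.0 - u ** 2, 0.0)

        def step1_varying_coefficients(t_y, X_sync, y, t_grid, h):
            """Kernel-weighted least squares of y on synchronous covariates at
            each grid time, giving time-varying coefficients beta(t)."""
            p = X_sync.shape[1]
            betas = np.empty((len(t_grid), p))
            for k, t0 in enumerate(t_grid):
                w = epanechnikov((t_y - t0) / h)
                Xw = X_sync * w[:, None]
                A = Xw.T @ X_sync + 1e-8 * np.eye(p)   # small ridge for stability
                betas[k] = np.linalg.solve(A, Xw.T @ y)
            return betas

        def step2_asynchronous_slope(t_y, resid, t_z, z, t0, h):
            """Kernel-weighted regression of step-1 residuals on an asynchronous
            scalar covariate z observed at times t_z: every response time is
            paired with every z measurement time, with weights that shrink as
            the two times move apart and as they move away from target time t0."""
            w = (epanechnikov((t_y[:, None] - t_z[None, :]) / h)
                 * epanechnikov((t_y[:, None] - t0) / h))
            zz = np.broadcast_to(z[None, :], w.shape)
            rr = np.broadcast_to(resid[:, None], w.shape)
            return np.sum(w * zz * rr) / (np.sum(w * zz * zz) + 1e-8)  # slope at t0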

    Techniques For Accelerating Large-Scale Automata Processing

    Get PDF
    The big-data era has brought new challenges to computer architectures due to large-scale computation and data. The problem becomes critical in domains where the computation is also irregular, among which we focus on automata processing in this dissertation. Automata are widely used in applications such as network intrusion detection, machine learning, and parsing. Large-scale automata processing is challenging for traditional von Neumann architectures, and many accelerator prototypes have been proposed in response; Micron's Automata Processor (AP) is one example. However, as a spatial architecture, the AP is unable to handle large automata programs without repeated reconfiguration and re-execution. We found that a large number of automata states are never enabled during execution but are still configured on the AP chips, leading to underutilization. To address this issue, we propose a lightweight offline profiling technique to predict the never-enabled states and keep them off the AP. Furthermore, we develop SparseAP, a new execution mode for the AP that handles mispredictions efficiently. Our software and hardware co-optimization obtains a 2.1x speedup over the baseline AP execution across 26 applications. Since the AP is not publicly available, we also aim to reduce the performance gap between a general-purpose accelerator, the Graphics Processing Unit (GPU), and the AP. We identify excessive data movement in the GPU memory hierarchy and propose optimization techniques to reduce it. Although these techniques significantly alleviate the memory-related bottlenecks, a side effect is the static assignment of work to cores, which leads to poor compute utilization as GPU cores are wasted on idle automata states. We therefore propose a new dynamic scheme that effectively balances compute utilization with reduced memory usage. Our combined optimizations provide a significant improvement over the previous state-of-the-art GPU implementations of automata, and they enable current GPUs to outperform the AP across several applications while performing within an order of magnitude on the rest. To make automata processing on GPUs more generic across tasks with different amounts of parallelism, we propose AsyncAP, a lightweight approach that scales with the input length. Threads run asynchronously in AsyncAP, alleviating the bottleneck of thread-block synchronization. The evaluation and detailed analysis demonstrate that AsyncAP achieves significant speedups, or at least comparable performance, under various scenarios for most of the applications. Future work aims to design automatic ways to generate optimizations and mappings between automata and computation resources for different GPUs, and to broaden the scope of this dissertation to domains such as graph computing.
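
    The offline-profiling observation above (many automata states are configured but never enabled) is easy to illustrate with a small sketch: run an NFA over representative inputs, record every state that ever becomes active, and report the rest as candidates to keep off the chip. The NFA encoding, the always-active start states, and the toy example below are illustrative assumptions, not the dissertation's profiling technique or the AP's actual programming model.

        def never_enabled_states(transitions, start_states, inputs):
            """Simulate an NFA (state -> symbol -> successor list) on each input,
            track the active state set per step, and return states never enabled."""
            all_states = set(transitions) | {s for by_sym in transitions.values()
                                               for succs in by_sym.values()
                                               for s in succs}
            enabled = set(start_states)
            for text in inputs:
                active = set(start_states)
                for ch in text:
                    active = {nxt for s in active
                                  for nxt in transitions.get(s, {}).get(ch, ())}
                    active |= set(start_states)   # start states stay active, as in
                                                  # keyword-search style automata
                    enabled |= active
            return all_states - enabled

        # Two keyword branches ("ab" and "xy"); on these sample inputs only the
        # "ab" branch ever fires, so the "xy" branch's states are never enabled.
        nfa = {"s0": {"a": ["a1"], "x": ["x1"]},
               "a1": {"b": ["a2"]}, "a2": {},
               "x1": {"y": ["x2"]}, "x2": {}}
        print(never_enabled_states(nfa, ["s0"], ["aab", "abab"]))   # {'x1', 'x2'}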